Drone Shadow Tracking
Aerial videos captured by a drone flying close to the surface may contain the
drone's shadow projected onto the scene, which degrades the aesthetic quality
of the videos. In the presence of other shadows, shadow removal cannot be
applied directly; the drone's shadow must first be tracked. Tracking a
drone's shadow in a video is, however, challenging: the shadow's varying size
and shape, changes in orientation, and the drone's altitude all pose
difficulties, and the shadow can easily disappear over dark areas.
Nevertheless, a shadow has specific properties
that can be leveraged, besides its geometric shape. In this paper, we
incorporate knowledge of the shadow's physical properties, in the form of
shadow detection masks, into a correlation-based tracking algorithm. We capture
a test set of aerial videos taken with different settings and compare our
results to those of a state-of-the-art tracking algorithm.
Comment: 5 pages, 4 figures
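The fusion of shadow-detection masks with a correlation tracker described above can be sketched as follows. This is a hypothetical illustration, not the authors' method: it assumes a precomputed correlation-filter response map and a per-pixel shadow probability mask, and simply re-weights the response so that peaks falling on likely shadow pixels are favoured (`alpha` is an assumed blending parameter).

```python
import numpy as np

def masked_response(correlation_response, shadow_mask, alpha=0.5):
    """Re-weight a correlation tracker's response map by a shadow mask.

    correlation_response: 2-D array, higher = better template match.
    shadow_mask: 2-D array in [0, 1], per-pixel shadow probability.
    alpha: blending weight between raw and mask-weighted response.
    """
    fused = (1 - alpha) * correlation_response \
            + alpha * correlation_response * shadow_mask
    # The predicted target location is the peak of the fused response.
    return np.unravel_index(np.argmax(fused), fused.shape)
```

With `alpha = 0`, this degenerates to the plain correlation tracker; larger values of `alpha` trust the shadow detector more, which helps reject correlation peaks caused by other dark objects in the scene.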
Balanced Quantization: An Effective and Efficient Approach to Quantized Neural Networks
Quantized Neural Networks (QNNs), which use low bitwidth numbers for
representing parameters and performing computations, have been proposed to
reduce the computation complexity, storage size and memory usage. In QNNs,
parameters and activations are uniformly quantized, such that the
multiplications and additions can be accelerated by bitwise operations.
However, the distributions of parameters in neural networks are often
imbalanced, so a uniform quantization determined from extremal values may
underutilize the available bitwidth. In this paper, we propose a novel quantization
method that can ensure the balance of distributions of quantized values. Our
method first recursively partitions the parameters by percentiles into balanced
bins, and then applies uniform quantization. We also introduce computationally
cheaper approximations of percentiles to reduce the overhead this step
introduces. Overall, our method improves the prediction accuracies of QNNs
without introducing extra computation during inference, has negligible impact
on training speed, and is applicable to both Convolutional Neural Networks and
Recurrent Neural Networks. Experiments on standard datasets including ImageNet
and Penn Treebank confirm the effectiveness of our method. On ImageNet, the
top-5 error rate of our 4-bit quantized GoogLeNet model is 12.7\%, which
surpasses the state of the art for QNNs.
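The core idea above, partitioning parameters by percentiles into equal-population bins and then quantizing, can be sketched as below. This is a minimal illustration of percentile-balanced binning, not the paper's implementation; the function name and the choice of uniformly spaced output levels over the value range are assumptions.

```python
import numpy as np

def balanced_quantize(weights, bits=2):
    """Quantize `weights` to 2**bits levels using balanced (percentile) bins.

    Percentile boundaries give bins that each hold roughly the same
    number of parameters, so every quantized level is actually used
    even when the weight distribution is heavily imbalanced.
    """
    flat = weights.ravel()
    n_bins = 2 ** bits
    # Equal-population bin edges from percentiles of the weight values.
    edges = np.percentile(flat, np.linspace(0, 100, n_bins + 1))
    # Assign each weight to its bin index in 0 .. n_bins - 1.
    idx = np.clip(np.searchsorted(edges, flat, side="right") - 1,
                  0, n_bins - 1)
    # Map each bin to one of n_bins uniformly spaced quantized levels.
    levels = np.linspace(flat.min(), flat.max(), n_bins)
    return levels[idx].reshape(weights.shape)
```

For a bell-shaped weight distribution, uniform quantization from the extremal values would waste levels in the sparse tails, whereas the percentile edges here concentrate levels where the mass is.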